
Impact of Macroeconomic Factors on ExxonMobil Stock Price

ExxonMobil Corporation (XOM) is one of the largest publicly traded international oil and gas companies in the world and a leading firm in the energy sector. ExxonMobil engages in the exploration, production, transportation, and sale of crude oil, natural gas, and petroleum products. With operations in over 70 countries, it has a significant global presence and is a major player in the energy industry. As of 2021, ExxonMobil also has investments in alternative energy technologies such as biofuels and carbon capture and storage.

By using an ARIMAX+ARCH/GARCH model to analyze the stock price behavior of ExxonMobil, we can gain insights into how macroeconomic factors impact the company’s performance. For example, an increase in GDP growth rate or a decrease in unemployment rate may lead to increased consumer spending and a rise in ExxonMobil’s stock price. Conversely, an increase in inflation or interest rates may lead to a decrease in consumer spending and a decline in ExxonMobil’s stock price.

Time Series Plots

  • ExxonMobil
  • Differenced Log Series
  • GDP Growth
  • Inflation
  • Interest
  • Unemployment
Code
# load required packages
library(quantmod)   # getSymbols(); also attaches TTR, which provides BBands()
library(plotly)     # interactive candlestick and volume charts

# get data
options("getSymbols.warning4.0"=FALSE)
options("getSymbols.yahoo.warning"=FALSE)

# download the XOM daily prices once; keep the xts object and a data frame copy
data.info <- getSymbols("XOM", src='yahoo', from='2010-01-01', to="2023-03-01", auto.assign=FALSE)
XOM <- data.info
df <- data.frame(Date=index(XOM), coredata(XOM))

# create Bollinger Bands
bbands <- BBands(XOM[,c("XOM.High","XOM.Low","XOM.Close")])

# join and subset data
df <- subset(cbind(df, data.frame(bbands[,1:3])), Date >= "2010-01-01")


# colors column for increasing and decreasing days (vectorized)
df$direction <- ifelse(df$XOM.Close >= df$XOM.Open, 'Increasing', 'Decreasing')

i <- list(line = list(color = '#F9E79F'))
d <- list(line = list(color = '#7F7F7F'))

# plot candlestick chart

fig <- df %>% plot_ly(x = ~Date, type="candlestick",
          open = ~XOM.Open, close = ~XOM.Close,
          high = ~XOM.High, low = ~XOM.Low, name = "XOM",
          increasing = i, decreasing = d) 
fig <- fig %>% add_lines(x = ~Date, y = ~up , name = "B Bands",
            line = list(color = '#ccc', width = 0.5),
            legendgroup = "Bollinger Bands",
            hoverinfo = "none", inherit = F) 
fig <- fig %>% add_lines(x = ~Date, y = ~dn, name = "B Bands",
            line = list(color = '#ccc', width = 0.5),
            legendgroup = "Bollinger Bands", inherit = F,
            showlegend = FALSE, hoverinfo = "none") 
fig <- fig %>% add_lines(x = ~Date, y = ~mavg, name = "Mv Avg",
            line = list(color = '#C052B3', width = 0.5),
            hoverinfo = "none", inherit = F) 
fig <- fig %>% layout(yaxis = list(title = "Price"))

# plot volume bar chart
fig2 <- df 
fig2 <- fig2 %>% plot_ly(x=~Date, y=~XOM.Volume, type='bar', name = "XOM Volume",
          color = ~direction, colors = c('#F9E79F','#7F7F7F')) 
fig2 <- fig2 %>% layout(yaxis = list(title = "Volume"))

# create rangeselector buttons
rs <- list(visible = TRUE, x = 0.5, y = -0.055,
           xanchor = 'center', yref = 'paper',
           font = list(size = 9),
           buttons = list(
             list(count=1,
                  label='RESET',
                  step='all'),
             list(count=3,
                  label='3 YR',
                  step='year',
                  stepmode='backward'),
             list(count=1,
                  label='1 YR',
                  step='year',
                  stepmode='backward'),
             list(count=1,
                  label='1 MO',
                  step='month',
                  stepmode='backward')
           ))

# subplot with shared x axis
fig <- subplot(fig, fig2, heights = c(0.7,0.2), nrows=2,
             shareX = TRUE, titleY = TRUE)
fig <- fig %>% layout(title = "ExxonMobil Stock Price: January 2010 - March 2023",
         xaxis = list(rangeselector = rs),
         legend = list(orientation = 'h', x = 0.5, y = 1,
                       xanchor = 'center', yref = 'paper',
                       font = list(size = 10),
                       bgcolor = 'transparent'))

fig
Code
# daily log returns: first difference of the log adjusted close
log(data.info$`XOM.Adjusted`) %>% diff() %>% chartSeries(theme=chartTheme('white'),up.col='#F9E79F')

Code
#import the data
gdp <- read.csv("DATA/RAW DATA/gdp-growth.csv")

#change date format
gdp$Date <- as.Date(gdp$DATE , "%m/%d/%Y")

#drop DATE column
gdp <- subset(gdp, select = -c(1))

#export the cleaned data
gdp_clean <- gdp
write.csv(gdp_clean, "DATA/CLEANED DATA/gdp_clean_data.csv", row.names=FALSE)

#plot gdp growth rate 
fig <- plot_ly(gdp, x = ~Date, y = ~value, type = 'scatter', mode = 'lines',line = list(color = 'rgb(240, 128, 128)'))
fig <- fig %>% layout(title = "U.S. GDP Growth Rate: 2010 - 2022",xaxis = list(title = "Time"),yaxis = list(title ="GDP Growth Rate"))
fig
Code
#import the data
inflation_rate <- read.csv("DATA/RAW DATA/inflation-rate.csv")

#cleaning the data
#remove unwanted columns
inflation_rate_clean <- subset(inflation_rate, select = -c(1,HALF1,HALF2))

#convert the data to time series data
inflation_data_ts <- ts(as.vector(t(as.matrix(inflation_rate_clean))), start=c(2010,1), end=c(2023,2), frequency=12)

#export the data
write.csv(inflation_rate_clean, "DATA/CLEANED DATA/inflation_rate_clean_data.csv", row.names=FALSE)


#plot inflation rate 
fig <- autoplot(inflation_data_ts, ylab = "Inflation Rate", color="#FFA07A")+ggtitle("U.S. Inflation Rate: January 2010 - February 2023")+theme_bw()
ggplotly(fig)
Code
#import the data
interest_data <- read.csv("DATA/RAW DATA/interest-rate.csv")

#change date format
interest_data$Date <- as.Date(interest_data$Date , "%m/%d/%Y")

#export the cleaned data
interest_clean_data <- interest_data
write.csv(interest_clean_data, "DATA/CLEANED DATA/interest_rate_clean_data.csv", row.names=FALSE)

#plot interest rate 
fig <- plot_ly(interest_data, x = ~Date, y = ~value, type = 'scatter', mode = 'lines',line = list(color='rgb(219, 112, 147)'))
fig <- fig %>% layout(title = "U.S. Interest Rate: January 2010 - March 2023",xaxis = list(title = "Time"),yaxis = list(title ="Interest Rate"))
fig
Code
#import the data
unemployment_rate <- read.csv("DATA/RAW DATA/unemployment-rate.csv")

#change date format
unemployment_rate$Date <- as.Date(unemployment_rate$Date , "%m/%d/%Y")

# export the data
write.csv(unemployment_rate, "DATA/CLEANED DATA/unemployment_rate_clean_data.csv", row.names=FALSE)

#plot unemployment rate 
fig <- plot_ly(unemployment_rate, x = ~Date, y = ~Value, type = 'scatter', mode = 'lines',line = list(color = 'rgb(189, 183, 107)'))
fig <- fig %>% layout(title = "U.S. Unemployment Rate: January 2010 - March 2023",xaxis = list(title = "Time"),yaxis = list(title ="Unemployment Rate"))
fig

ExxonMobil (XOM) is one of the largest integrated oil and gas companies in the world. From 2010 to 2023, XOM’s stock price experienced some significant fluctuations.

In the early part of the decade, XOM’s stock price steadily increased as the global demand for oil and gas remained strong. However, the company’s stock price began to decline in mid-2014, as the oversupply of oil and gas in the global market led to a drop in prices. This decline was further exacerbated by the COVID-19 pandemic in 2020, which caused a sharp drop in global demand for oil and gas.

Despite these challenges, XOM’s stock price has shown signs of recovery in recent years, as the company has taken steps to reduce its costs and increase its focus on natural gas and chemicals. In addition, the company has made significant investments in renewable energy, which could help it to diversify its revenue streams in the long term.

Since mid-2020, ExxonMobil’s stock price has experienced some volatility, likely due to a combination of factors such as global economic uncertainty and fluctuations in consumer demand for the company’s products.

As discussed before, the macroeconomic factors of GDP growth, inflation, interest rates, and unemployment rate are closely interrelated and play a crucial role in the overall health and stability of an economy. From 2010 to 2023, the global economy experienced a mix of ups and downs, with periods of strong GDP growth followed by slowdowns and recessions.

The second plot shows the first difference of the logarithm of the adjusted ExxonMobil stock price. Taking the first difference removes any long-term trends and transforms the time series into a stationary process. From the plot, we can observe that the first difference of the logarithm of the ExxonMobil stock price appears to be stationary, as the mean and variance are roughly constant over time.
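
As a quick check on that visual impression, a unit-root test can be run on the log returns. Below is a minimal sketch, assuming the tseries package; adf.test() is its Augmented Dickey-Fuller implementation, where a small p-value supports stationarity.

Code
library(tseries)   # adf.test()

# daily log returns: first difference of the log adjusted close
log_returns <- diff(log(as.numeric(data.info$XOM.Adjusted)))

# H0: the series has a unit root (non-stationary)
adf.test(log_returns)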

Endogenous and Exogenous Variables

  • Plot
  • Correlation Heatmap
  • CCF GDP
  • CCF Interest
  • CCF Inflation
  • CCF Unemployment
Code
# 'final' is the merged data set of the XOM price and the macroeconomic series
numeric_cols <- c("XOM.Adjusted","gdp", "interest", "inflation", "unemployment")
numeric_data <- final[, numeric_cols]
normalized_data_numeric <- scale(numeric_data)   # standardize each column: mean 0, sd 1
normalized_data <- ts(normalized_data_numeric, start = c(2010, 1), end = c(2021,10),frequency = 4)
ts_plot(normalized_data,
        title = "Normalized Time Series Data for XOM Stock and Macroeconomic Variables",
        Ytitle = "Normalized Values",
        Xtitle = "Year")
Code
library(reshape2)   # melt()

# Get upper triangle of the correlation matrix
get_upper_tri <- function(cormat){
    cormat[lower.tri(cormat)] <- NA
    return(cormat)
}
cormat <- round(cor(normalized_data_numeric),2)

upper_tri <- get_upper_tri(cormat)

melted_cormat <- melt(upper_tri, na.rm = TRUE)
# Create a ggheatmap
ggheatmap <- ggplot(melted_cormat, aes(Var2, Var1, fill = value))+
 geom_tile(color = "white")+
 scale_fill_gradient2(low = "blue", high = "red", mid = "white", 
   midpoint = 0, limit = c(-1,1), space = "Lab", 
    name="Pearson\nCorrelation") +
  theme_minimal()+ # minimal theme
 theme(axis.text.x = element_text(angle = 45, vjust = 1, 
    size = 12, hjust = 1))+
 coord_fixed()

ggheatmap + 
geom_text(aes(Var2, Var1, label = value), color = "black", size = 4) +
theme(
  axis.title.x = element_blank(),
  axis.title.y = element_blank(),
  panel.grid.major = element_blank(),
  panel.border = element_blank(),
  panel.background = element_blank(),
  axis.ticks = element_blank(),
  legend.justification = c(1, 0),
  legend.position = c(0.6, 0.7),
  legend.direction = "horizontal")+
  guides(fill = guide_colorbar(barwidth = 7, barheight = 1,
                title.position = "top", title.hjust = 0.5))

Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("XOM.Adjusted")], normalized_data[, c("gdp")], 
    lag.max = 300,
    main = "Cross-Correlation Plot for XOM Stock Price and GDP Growth Rate",
    ylab = "CCF")

Code
cat("The sum of the absolute cross-correlations is", sum(abs(ccf_result$acf)))
The sum of the absolute cross-correlations is 8.584101
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("XOM.Adjusted")], normalized_data[, c("interest")], 
    lag.max = 300,
    main = "Cross-Correlation Plot for XOM Stock Price and Interest Rate",
    ylab = "CCF")

Code
cat("The sum of the absolute cross-correlations is", sum(abs(ccf_result$acf)))
The sum of the absolute cross-correlations is 13.78063
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("XOM.Adjusted")], normalized_data[, c("inflation")], 
    lag.max = 300,
    main = "Cross-Correlation Plot for XOM Stock Price and Inflation Rate",
    ylab = "CCF")

Code
cat("The sum of the absolute cross-correlations is", sum(abs(ccf_result$acf)))
The sum of the absolute cross-correlations is 13.02693
Code
par(mfrow=c(1,1))
ccf_result <- ccf(normalized_data[, c("XOM.Adjusted")], normalized_data[, c("unemployment")], 
    lag.max = 300,
    main = "Cross-Correlation Plot for XOM Stock Price and Unemployment Rate",
    ylab = "CCF")

Code
cat("The sum of the absolute cross-correlations is", sum(abs(ccf_result$acf)))
The sum of the absolute cross-correlations is 11.6615

The Normalized Time Series Data for Stock Price and Macroeconomic Variables plot shows the same variables as the first plot, standardized using the scale() function in R so that each variable has a mean of 0 and a standard deviation of 1. The heatmap of the normalized data reveals that inflation and the unemployment rate exhibit strong correlations with the stock price, indicating that these variables may significantly influence stock price movements. Weaker correlations were observed between the stock price and GDP growth and interest rates, suggesting that these variables have less impact on stock price fluctuations. The cross-correlation plots support these findings, indicating that inflation and the unemployment rate are the more suitable feature variables for the ARIMAX model when predicting ExxonMobil’s movements.
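
For illustration, here is a minimal sketch of what scale() actually does to a single variable: it subtracts the mean and divides by the standard deviation, rather than rescaling to a 0-to-1 range.

Code
x <- c(2, 4, 6, 8, 10)
z <- as.vector(scale(x))   # (x - mean(x)) / sd(x)
round(mean(z), 10)         # 0
sd(z)                      # 1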

Final exogenous variables (macroeconomic indicators): the inflation rate and the unemployment rate.

Endogenous and Exogenous Variables Plot

  • Plot
  • Check the stationarity
Code
final_data <- final %>% dplyr::select(Date, XOM.Adjusted, inflation, unemployment)
numeric_cols <- c("XOM.Adjusted", "inflation", "unemployment")
numeric_data <- final_data[, numeric_cols]
normalized_data_numeric <- scale(numeric_data)
normalized_numeric_df <- data.frame(normalized_data_numeric)
normalized_data_ts <- ts(normalized_data_numeric, start = c(2010, 1), frequency = 4)

autoplot(normalized_data_ts, facets=TRUE) +
  xlab("Year") + ylab("") +
  ggtitle("ExxonMobil Stock Price, Inflation Rate and Unemployment Rate in USA 2010-2023")

Code
# Convert your multivariate time series data to a matrix
final_data_ts_multivariate <- as.matrix(normalized_data_ts)

# Check for stationarity using Phillips-Perron test
phillips_perron_test <- ur.pp(final_data_ts_multivariate)  
summary(phillips_perron_test)

################################## 
# Phillips-Perron Unit Root Test # 
################################## 

Test regression with intercept 


Call:
lm(formula = y ~ y.l1)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.6785 -0.2163 -0.0613  0.1641  4.4054 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.003019   0.049779   0.061    0.952    
y.l1        0.780682   0.050284  15.526   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.6197 on 153 degrees of freedom
Multiple R-squared:  0.6117,    Adjusted R-squared:  0.6092 
F-statistic:   241 on 1 and 153 DF,  p-value: < 2.2e-16


Value of test-statistic, type: Z-alpha  is: -35.294 

         aux. Z statistics
Z-tau-mu            0.0608

The Phillips-Perron unit root test provides strong evidence against the null hypothesis of a unit root: the Z-alpha test statistic of -35.294 is far below typical critical values, and the coefficient on the lagged level is estimated at 0.78, well below one. This suggests that the series being tested is stationary. Note that Z-tau-mu = 0.0608 is an auxiliary statistic for the intercept, not the unit-root test statistic itself.
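
Since ur.pp() is designed for a single series, running it on each column separately gives a cleaner per-variable picture. A minimal sketch, assuming the urca package and the normalized_data_ts object defined above:

Code
library(urca)   # ur.pp()

# Phillips-Perron test on each variable; reject a unit root when the
# Z-tau statistic falls below the 5% critical value
for (col in colnames(normalized_data_ts)) {
  pp <- ur.pp(normalized_data_ts[, col], type = "Z-tau", model = "constant")
  cat(col, ": Z-tau =", round(pp@teststat, 3),
      "| 5% critical value =", pp@cval[1, "5pct"], "\n")
}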

To determine whether the linear model’s residuals require an ARCH model, an ARCH LM test is conducted. The ACF and PACF plots of the returns are then used to identify suitable model orders.

Model Fitting

  • Plot
  • ARCH Test
  • ACF Plot
  • PACF Plot
Code
normalized_numeric_df$XOM.Adjusted <- ts(normalized_numeric_df$XOM.Adjusted, start=decimal_date(as.Date("2010-01-01",format = "%Y-%m-%d")), frequency = 4)
normalized_numeric_df$inflation <- ts(normalized_numeric_df$inflation, start=decimal_date(as.Date("2010-01-01",format = "%Y-%m-%d")), frequency = 4)
normalized_numeric_df$unemployment <- ts(normalized_numeric_df$unemployment, start=decimal_date(as.Date("2010-01-01",format = "%Y-%m-%d")), frequency = 4)

fit <- lm(XOM.Adjusted ~ inflation+unemployment, data=normalized_numeric_df)
fit.res <- ts(residuals(fit), start=decimal_date(as.Date("2010-01-01",format = "%Y-%m-%d")), frequency = 4)
############## Then look at the residuals ############
returns <- fit.res  %>% diff()
autoplot(returns)+ggtitle("Linear Model Returns")

Code
byd.archTest <- ArchTest(fit.res, lags = 1, demean = TRUE)   # ArchTest() from the FinTS package
byd.archTest

    ARCH LM-test; Null hypothesis: no ARCH effects

data:  fit.res
Chi-squared = 6.3634, df = 1, p-value = 0.01165
Code
ggAcf(returns) +ggtitle("ACF for returns")

Code
ggPacf(returns) +ggtitle("PACF for returns")

The ARCH LM-test was conducted with the null hypothesis of no ARCH effects. The test returned a chi-squared value of 6.3634 with one degree of freedom and a p-value of 0.01165. Since the p-value is below 0.05, we reject the null hypothesis, indicating the presence of ARCH effects in the data.

Based on the ACF and PACF plots, most spikes lie within the confidence bands, with no strong persistent autocorrelation or partial autocorrelation in the returns. This points to low ARMA orders, with p = 0 and q = 0 suggested by the plots, while the ARCH test result indicates that an ARIMA model alone is not sufficient and a GARCH component is needed to capture the conditional heteroskedasticity.
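
To make the manual order choice explicit, a small AIC grid search over low ARIMA(p,1,q) orders can be run on the regression residuals. A minimal sketch, assuming the forecast package and the fit.res series defined above:

Code
library(forecast)   # Arima()

# compare low-order ARIMA(p,1,q) fits to the regression residuals by AIC
orders <- expand.grid(p = 0:2, q = 0:2)
orders$AIC <- apply(orders, 1, function(o) {
  AIC(Arima(fit.res, order = c(o["p"], 1, o["q"])))
})
orders[order(orders$AIC), ][1:3, ]   # three best candidates, smallest AIC first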

ARIMAX Model

  • Auto Arima Model
  • Auto Arima Residuals
Code
xreg <- cbind(Inflation = normalized_data_ts[, "inflation"],
              Unemployment = normalized_data_ts[, "unemployment"])
fit.auto <- auto.arima(normalized_data_ts[, "XOM.Adjusted"], xreg = xreg)
summary(fit.auto)
Series: normalized_data_ts[, "XOM.Adjusted"] 
Regression with ARIMA(0,1,0) errors 

Coefficients:
      Inflation  Unemployment
         0.4785       -0.3455
s.e.     0.2235        0.1014

sigma^2 = 0.2642:  log likelihood = -37.4
AIC=80.8   AICc=81.32   BIC=86.6

Training set error measures:
                     ME      RMSE       MAE      MPE     MAPE      MASE
Training set 0.02751447 0.4989419 0.3970164 40.31885 217.5686 0.4290686
                   ACF1
Training set -0.1780107
Code
checkresiduals(fit.auto)


    Ljung-Box test

data:  Residuals from Regression with ARIMA(0,1,0) errors
Q* = 5.972, df = 8, p-value = 0.6504

Model df: 0.   Total lags used: 8

Based on the results of the auto.arima function, the suggested best model is ARIMA(0,1,0), which matches the manually chosen ARIMA model. We can therefore proceed to choose the best GARCH model using ARIMA(0,1,0) as the base model.

Squared Residuals

  • Plot
  • ACF Plot
  • PACF Plot
  • GARCH(1,1)
Code
fit <- lm(XOM.Adjusted ~ inflation+unemployment, data=normalized_numeric_df)
fit.res <- ts(residuals(fit), start=decimal_date(as.Date("2010-01-01",format = "%Y-%m-%d")), frequency = 4)
fit <- Arima(fit.res, order=c(0,1,0))
res <- fit$residuals
plot(res^2,main='Squared Residuals')

Code
acf(res^2,24, main = "ACF Residuals Square")

Code
pacf(res^2,24, main = "PACF Residuals Square")

Code
library(fGarch)   # garchFit()
summary(garchFit(~garch(1,1), res, trace=F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x10aaacf20>
 [data = res]

Conditional Distribution:
 norm 

Coefficient(s):
       mu      omega     alpha1      beta1  
0.0090957  0.0010266  0.0948422  0.9289556  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu      0.009096    0.066163    0.137    0.891    
omega   0.001027    0.025018    0.041    0.967    
alpha1  0.094842    0.089034    1.065    0.287    
beta1   0.928956    0.128327    7.239 4.52e-13 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 -38.1069    normalized:  -0.7328249 

Description:
 Tue Jan  9 21:00:15 2024 by user:  


Standardised Residuals Tests:
                                Statistic p-Value  
 Jarque-Bera Test   R    Chi^2  1.133828  0.5672733
 Shapiro-Wilk Test  R    W      0.9863453 0.8107262
 Ljung-Box Test     R    Q(10)  6.442417  0.7768255
 Ljung-Box Test     R    Q(15)  10.96014  0.7554148
 Ljung-Box Test     R    Q(20)  13.75258  0.8428156
 Ljung-Box Test     R^2  Q(10)  11.8193   0.2973321
 Ljung-Box Test     R^2  Q(15)  13.76539  0.5433921
 Ljung-Box Test     R^2  Q(20)  16.25387  0.7007548
 LM Arch Test       R    TR^2   13.29826  0.3477401

Information Criterion Statistics:
     AIC      BIC      SIC     HQIC 
1.619496 1.769592 1.608751 1.677039 

From the squared residuals of the best ARIMA model, the ACF and PACF plots show most values lying within the blue confidence bands, indicating no strong remaining autocorrelation. The few significant early lags in the squared residuals suggest GARCH orders of p = 1 and q = 1 for the conditional variance equation.
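
To back up the GARCH(1,1) choice, a few low-order GARCH candidates can also be compared on AIC. A minimal sketch, assuming the fGarch package and the res series from the ARIMA(0,1,0) fit above:

Code
library(fGarch)   # garchFit()

# compare low-order GARCH(p,q) fits to the ARIMA residuals by AIC (smaller is better)
for (p in 1:2) {
  for (q in 1:2) {
    g <- garchFit(as.formula(sprintf("~ garch(%d, %d)", p, q)),
                  data = res, trace = FALSE)
    cat(sprintf("GARCH(%d,%d): AIC = %.4f\n", p, q, g@fit$ics["AIC"]))
  }
}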

The best model is ARIMA(0,1,0) + GARCH(1,1).

Best Model

  • ARIMA Model
  • GARCH Model
  • Volatility
Code
#fitting an ARIMA model to the Inflation variable
inflation_fit <- auto.arima(normalized_numeric_df$inflation) 
finflation <- forecast(inflation_fit)

#fitting an ARIMA model to the Unemployment variable
unemployment_fit <- auto.arima(normalized_numeric_df$unemployment) 
funemployment <- forecast(unemployment_fit)

# best model fit for forecasting
xreg <- cbind(Inflation = normalized_data_ts[, "inflation"],
              Unemployment = normalized_data_ts[, "unemployment"])

# note: include.drift belongs inside Arima(), not summary(); it is dropped here
# so the call matches the reported output (no drift term)
summary(arima.fit <- Arima(normalized_data_ts[, "XOM.Adjusted"], order=c(0,1,0), xreg=xreg))
Series: normalized_data_ts[, "XOM.Adjusted"] 
Regression with ARIMA(0,1,0) errors 

Coefficients:
      Inflation  Unemployment
         0.4785       -0.3455
s.e.     0.2235        0.1014

sigma^2 = 0.2642:  log likelihood = -37.4
AIC=80.8   AICc=81.32   BIC=86.6

Training set error measures:
                     ME      RMSE       MAE      MPE     MAPE      MASE
Training set 0.02751447 0.4989419 0.3970164 40.31885 217.5686 0.4290686
                   ACF1
Training set -0.1780107
Code
summary(final.fit <- garchFit(~garch(1,1), res,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x12ea4fb90>
 [data = res]

Conditional Distribution:
 norm 

Coefficient(s):
       mu      omega     alpha1      beta1  
0.0090957  0.0010266  0.0948422  0.9289556  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu      0.009096    0.066163    0.137    0.891    
omega   0.001027    0.025018    0.041    0.967    
alpha1  0.094842    0.089034    1.065    0.287    
beta1   0.928956    0.128327    7.239 4.52e-13 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 -38.1069    normalized:  -0.7328249 

Description:
 Tue Jan  9 21:00:15 2024 by user:  


Standardised Residuals Tests:
                                Statistic p-Value  
 Jarque-Bera Test   R    Chi^2  1.133828  0.5672733
 Shapiro-Wilk Test  R    W      0.9863453 0.8107262
 Ljung-Box Test     R    Q(10)  6.442417  0.7768255
 Ljung-Box Test     R    Q(15)  10.96014  0.7554148
 Ljung-Box Test     R    Q(20)  13.75258  0.8428156
 Ljung-Box Test     R^2  Q(10)  11.8193   0.2973321
 Ljung-Box Test     R^2  Q(15)  13.76539  0.5433921
 Ljung-Box Test     R^2  Q(20)  16.25387  0.7007548
 LM Arch Test       R    TR^2   13.29826  0.3477401

Information Criterion Statistics:
     AIC      BIC      SIC     HQIC 
1.619496 1.769592 1.608751 1.677039 
Code
ht <- final.fit@h.t #a numeric vector with the conditional variances (h.t = sigma.t^delta)

#############################
data <- data.frame(final)
data$Date <- as.Date(data$Date, "%Y-%m-%d")

# conditional variance series against time
data2 <- data.frame(ht, data$Date)
ggplot(data2, aes(y = ht, x = data.Date)) + geom_line(col = '#F9E79F') + ylab('Conditional Variance') + xlab('Date')

From the ARIMA(0,1,0) fit, the training set error measures suggest a reasonable fit, with low mean absolute error, root mean squared error, and residual autocorrelation. The GARCH(1,1) model is used to estimate the volatility of the residuals of the regression model: it consists of a mean equation for the level of the residuals and a variance equation for their conditional variance. The mean equation coefficient suggests that the mean of the residuals is close to zero, while the variance equation coefficients show that the conditional variance depends on the past conditional variance and the past squared residuals. The AIC, BIC, SIC, and HQIC values are all relatively low, indicating a good fit, and the standardized residuals tests indicate that the residuals are approximately normally distributed with no significant autocorrelation.

The volatility of the model seems high in 2020 but has decreased gradually in the past few months. This could indicate that the asset’s price was experiencing a lot of fluctuations in 2020, but the market has stabilized recently.

Model Diagnostics

  • Residuals
  • QQ Plot
  • Box Test
Code
fit2 <- garch(res, order = c(1,1), trace = F)   # garch() from the tseries package
checkresiduals(fit2) 
Warning in modeldf.default(object): Could not find appropriate degrees of
freedom for this model.

Code
qqnorm(fit2$residuals, pch = 1)
qqline(fit2$residuals, col = "blue", lwd = 2)

Code
Box.test(fit2$residuals, type = "Ljung-Box")

    Box-Ljung test

data:  fit2$residuals
X-squared = 0.88262, df = 1, p-value = 0.3475

The ACF plot of the residuals shows all values within the blue confidence bands, indicating that the residuals are not significantly autocorrelated, and the residuals lying mostly between -2 and 2 is within an acceptable range. In addition, the QQ plot shows the points falling close to the reference line, which suggests that the residuals are approximately normally distributed.

For the Box-Ljung test, the p-value of 0.3475 indicates that the model’s residuals are not significantly autocorrelated, meaning the model has captured most of the structure in the data. This suggests the model fits the data well, and its predictions can be used with reasonable confidence.
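
As a complementary check for leftover ARCH effects, the same test can be applied to the squared residuals; a large p-value here would indicate that the GARCH component has absorbed the volatility clustering.

Code
# Ljung-Box test on the squared residuals (checks for remaining ARCH effects)
Box.test(na.omit(fit2$residuals)^2, type = "Ljung-Box")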

Forecast

Code
predict(final.fit, n.ahead = 5, plot=TRUE)

  meanForecast meanError standardDeviation lowerInterval upperInterval
1  0.009095674 0.7486581         0.7486581     -1.458247      1.476439
2  0.009095674 0.7581913         0.7581913     -1.476932      1.495123
3  0.009095674 0.7678288         0.7678288     -1.495821      1.514012
4  0.009095674 0.7775718         0.7775718     -1.514917      1.533108
5  0.009095674 0.7874219         0.7874219     -1.534223      1.552414

The forecast plot is based on the best model, ARIMAX(0,1,0)+GARCH(1,1). This model captures the differenced (random walk) structure of the series and the impact of the exogenous variables, while the GARCH component accounts for the volatility clustering in the data. Overall, this model is well suited to making predictions about future values of the time series.
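
One way to turn the two fitted pieces into a single forecast is to take the mean path from the ARIMAX fit, driven by the ARIMA forecasts of the regressors, and widen it with the GARCH standard deviations. A minimal sketch, assuming the arima.fit, finflation, funemployment, and final.fit objects created above; the ±1.96·σ band is only an approximate 95% interval.

Code
h <- 5

# mean forecast from the ARIMAX fit, using forecasts of the exogenous regressors
fxreg <- cbind(Inflation = finflation$mean[1:h],
               Unemployment = funemployment$mean[1:h])
mean_fc <- forecast(arima.fit, xreg = fxreg)

# volatility forecast from the GARCH fit
vol_fc <- predict(final.fit, n.ahead = h)

# approximate interval: ARIMAX mean +/- 1.96 * GARCH standard deviation
data.frame(mean  = as.numeric(mean_fc$mean),
           lower = as.numeric(mean_fc$mean) - 1.96 * vol_fc$standardDeviation,
           upper = as.numeric(mean_fc$mean) + 1.96 * vol_fc$standardDeviation)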

Equation of the Model

The equation of the ARIMAX(0,1,0) model (a regression with ARIMA(0,1,0) errors) is:

\(Y(t) = \gamma_1 X_1(t) + \gamma_2 X_2(t) + \eta(t), \qquad \eta(t) = \eta(t-1) + \epsilon(t)\)

where \(Y(t)\) is the time series variable, \(X_1(t)\) and \(X_2(t)\) are the exogenous regressors (inflation and unemployment), \(\eta(t)\) is the regression error following a random walk, and \(\epsilon(t)\) is the error term. Without the regressors this reduces to \(Y(t) = Y(t-1) + \epsilon(t)\).

The equation of the GARCH(1,1) model is:

\(\sigma^2(t) = \alpha_0 + \alpha_1\epsilon^2(t-1) + \beta_1\sigma^2(t-1)\)

where \(\sigma^2(t)\) is the conditional variance at time \(t\), \(\alpha_0\) is a constant, \(\alpha_1\) and \(\beta_1\) are the parameters, and \(\epsilon(t)\) is the error term.

The combined equations of the ARIMAX(0,1,0)+GARCH(1,1) model are:

\(Y(t) = \gamma_1 X_1(t) + \gamma_2 X_2(t) + \eta(t), \qquad \eta(t) = \eta(t-1) + \epsilon(t)\)

\(\epsilon(t) = \sigma(t)\,\tilde{\epsilon}(t)\), where \(\tilde{\epsilon}(t)\) is i.i.d. standard normal noise,

\(\sigma^2(t) = \alpha_0 + \alpha_1\epsilon^2(t-1) + \beta_1\sigma^2(t-1)\)
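
To make these equations concrete, here is a short simulation of the random-walk core with GARCH(1,1) innovations. The exogenous terms are omitted, and the parameter values are illustrative ones chosen so that \(\alpha_1 + \beta_1 < 1\) and the variance recursion is stationary; they are not the fitted coefficients.

Code
set.seed(42)

n      <- 200
alpha0 <- 0.001   # illustrative values, not the fitted ones
alpha1 <- 0.09
beta1  <- 0.85    # alpha1 + beta1 < 1 keeps the variance process stationary

z      <- rnorm(n)                           # standardized noise
sigma2 <- numeric(n); eps <- numeric(n); Y <- numeric(n)
sigma2[1] <- alpha0 / (1 - alpha1 - beta1)   # start at the unconditional variance
eps[1]    <- sqrt(sigma2[1]) * z[1]
Y[1]      <- eps[1]

for (t in 2:n) {
  sigma2[t] <- alpha0 + alpha1 * eps[t-1]^2 + beta1 * sigma2[t-1]  # variance equation
  eps[t]    <- sqrt(sigma2[t]) * z[t]                              # eps(t) = sigma(t) * noise
  Y[t]      <- Y[t-1] + eps[t]                                     # random walk level
}

plot.ts(cbind(Y = Y, sigma2 = sigma2),
        main = "Simulated random walk with GARCH(1,1) errors")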